Convergence rate estimates for the gradient differential inclusion


Abstract



Let f : H → R ∪ {∞} be a proper, lower semicontinuous, convex function on a Hilbert space H. The gradient differential inclusion is x′(t) ∈ −∂f(x(t)), x(0) = x, where x ∈ dom(f). If f is differentiable, the inclusion can be viewed as the continuous-time version of the steepest descent method for minimizing f on H. Even if f is not differentiable, the inclusion has a unique solution {x(t) : t > ...
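Not from the paper, but a useful concretization: the implicit Euler discretization of x′(t) ∈ −∂f(x(t)) with step h is the proximal point iteration x_{k+1} = argmin_y { f(y) + ‖y − x_k‖²/(2h) }, which stays well defined when f is nondifferentiable. Below is a minimal Python sketch for f(x) = |x|, whose proximal map has the closed-form soft-thresholding expression; the step size and iteration count are arbitrary illustration choices.

```python
import numpy as np

def prox_abs(x, h):
    """Proximal map of f(x) = |x| with step h (soft-thresholding):
    the closed-form solution of argmin_y |y| + (y - x)**2 / (2*h)."""
    return np.sign(x) * np.maximum(np.abs(x) - h, 0.0)

def implicit_euler_flow(x0, h=0.1, steps=50):
    """Approximate the flow x'(t) in -∂f(x(t)) by implicit Euler,
    i.e. the proximal point iteration x_{k+1} = prox_{h f}(x_k)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(prox_abs(xs[-1], h))
    return np.array(xs)

traj = implicit_euler_flow(x0=1.0)
print(traj[:13])  # 1.0, 0.9, ..., 0.1, 0.0 -- reaches the minimizer and stays
```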

Similar articles

Convergence Estimates for Preconditioned Gradient Subspace Iteration Eigensolvers

Subspace iteration for computing several eigenpairs (i.e., eigenvalues and eigenvectors) of an eigenvalue problem is an alternative to the deflation technique, whereby the eigenpairs are computed successively by projecting the problem onto the subspace orthogonal to the already found eigenvectors. The main advantage of subspace iteration over deflation is its 'cluster robustness': even if...
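For orientation, here is a textbook sketch of plain subspace iteration, not the preconditioned method analyzed in that paper: repeatedly multiply a block of vectors by the matrix, re-orthonormalize with a QR factorization, and extract eigenpairs by a Rayleigh-Ritz projection. The matrix, block size, and iteration count are illustrative; the clustered pair 10 and 9.99 demonstrates the 'cluster robustness' mentioned above.

```python
import numpy as np

def subspace_iteration(A, k, iters=200, seed=0):
    """Basic subspace iteration: compute the k dominant eigenpairs of a
    symmetric matrix A simultaneously, re-orthonormalizing every step."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)        # iterate the whole block at once
    # Rayleigh-Ritz: project A onto span(Q) to extract eigenpairs
    theta, S = np.linalg.eigh(Q.T @ A @ Q)
    return theta, Q @ S

A = np.diag([10.0, 9.99, 5.0, 1.0])       # clustered dominant pair 10, 9.99
theta, V = subspace_iteration(A, k=2)
print(theta)                              # ~ [9.99, 10.0] despite the cluster
```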


Improvement of the rate of convergence estimates for multigrid algorithm

In this paper, we present a new convergence rate for both symmetric and nonsymmetric positive definite problems. Our theory uses a ''regularity and approximation'' assumption, and a new convergence rate estimate with α = 1/2 is obtained, where α is the ''regularity and approximation'' parameter. A new convergence rate is also given for two-grid schemes.
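To make "two-grid scheme" concrete, here is a minimal sketch for the 1D Poisson model problem, not the setting or theory of that paper: one cycle combines weighted-Jacobi smoothing sweeps with a coarse-grid correction built from linear interpolation and its Galerkin coarse operator. Grid size, smoother, and sweep counts are arbitrary illustration choices.

```python
import numpy as np

def poisson1d(n):
    """1D Poisson matrix (Dirichlet boundary), n interior points."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def prolongation(nc):
    """Linear interpolation from nc coarse points to n = 2*nc + 1 fine points."""
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        i = 2 * j + 1                    # fine index of coarse point j
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
    return P

def weighted_jacobi(A, x, b, sweeps=2, w=2/3):
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + w * (b - A @ x) / d
    return x

def two_grid(A, Ac, P, x, b):
    """One cycle: pre-smooth, coarse-grid correction, post-smooth."""
    x = weighted_jacobi(A, x, b)
    r = b - A @ x
    x = x + P @ np.linalg.solve(Ac, P.T @ r)   # correction in range(P)
    return weighted_jacobi(A, x, b)

nc = 31
A = poisson1d(2 * nc + 1)
P = prolongation(nc)
Ac = P.T @ A @ P                               # Galerkin coarse operator
x = np.random.default_rng(0).standard_normal(A.shape[0])
for _ in range(5):                             # solve Ax = 0; error is just x
    x = two_grid(A, Ac, P, x, np.zeros_like(x))
    print(np.linalg.norm(x))                   # contracts by a mesh-independent factor
```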


Theoretical rate of convergence for interval inclusion functions

Geometric branch-and-bound methods are commonly used solution algorithms for non-convex global optimization problems in small dimensions, say for problems with up to six or ten variables, and the efficiency of these methods depends on some required lower bounds. For example, in interval branch-and-bound methods various well-known lower bounds are derived from interval inclusion functions. The aim...
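To make "interval inclusion function" concrete, here is a from-scratch sketch; the class, the polynomial, and the box are all invented for illustration. Evaluating a function with interval arithmetic encloses its range over a box, and the enclosure's lower endpoint is exactly the kind of guaranteed lower bound a branch-and-bound method prunes with. The natural extension typically overestimates; how fast the overestimation shrinks as boxes shrink is the rate question at issue.

```python
class Interval:
    """Minimal interval arithmetic, enough for natural interval
    extensions of expressions built from +, * and constants."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        other = _lift(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    __radd__ = __add__

    def __mul__(self, other):
        other = _lift(other)
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    __rmul__ = __mul__

def _lift(x):
    return x if isinstance(x, Interval) else Interval(x, x)

def f(x):
    # Non-convex polynomial x^3 - 2x, written with + and * only so the
    # same code evaluates on floats and on Intervals alike.
    return x * x * x + (-2.0) * x

F = f(Interval(-1.0, 2.0))   # natural interval extension over [-1, 2]
print(F.lo)                  # guaranteed (if loose) lower bound on min f
```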


Convergence rate of incremental aggregated gradient algorithms

Motivated by applications to distributed asynchronous optimization and large-scale data processing, we analyze the incremental aggregated gradient method for minimizing a sum of strongly convex functions from a novel perspective, simplifying the global convergence proofs considerably and proving a linear rate result. We also consider an aggregated method with momentum and show its linear convergence...
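A bare-bones rendering of the incremental aggregated gradient idea, as a sketch under simplifying assumptions rather than that paper's algorithm or step-size rule: keep one stored gradient per component, refresh exactly one per step, and descend along the running aggregate, so most gradients in the search direction are stale. The quadratic components and the step size are arbitrary illustration choices.

```python
import numpy as np

def iag(grads, x0, lr=0.05, epochs=100):
    """Incremental aggregated gradient: cycle through the components,
    refresh one stored gradient per step, and move against the running
    sum of the (generally stale) component gradients."""
    n = len(grads)
    table = [g(x0) for g in grads]   # one stored gradient per component
    agg = sum(table)                 # aggregate of stored gradients
    x = x0.copy()
    for _ in range(epochs):
        for i in range(n):
            g_new = grads[i](x)
            agg += g_new - table[i]  # cheap O(dim) aggregate update
            table[i] = g_new
            x = x - (lr / n) * agg
    return x

# Sum of strongly convex quadratics f_i(x) = 0.5 * ||x - c_i||^2,
# whose sum is minimized at the mean of the centers c_i.
rng = np.random.default_rng(0)
centers = rng.standard_normal((5, 3))
grads = [lambda x, c=c: x - c for c in centers]
print(iag(grads, x0=np.zeros(3)))    # ~ centers.mean(axis=0)
```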



Journal

Journal title: Optimization Methods and Software

Year: 2005

ISSN: 1055-6788, 1029-4937

DOI: 10.1080/10556780500094770